# 30 Determinants: Part 2.

In this part we will discuss some computational aspects of the three main ways of computing determinants:

1. By the Leibniz entrywise formula.
2. By Laplace cofactor expansion.
3. By elementary row operations.

Each method offers its own advantages, and in practice one uses a combination of these methods to compute determinants. Once we have established the elementary row operations method of computing determinants, we can prove three important facts:

1. The determinant is multiplicative: $\det(AB)=\det(A)\det(B)$ for $n\times n$ matrices $A,B$.
2. If a matrix $A$ is invertible, then $\det(A)\neq0$, and $\det(A^{-1})=\frac{1}{\det(A)}$.
3. The determinant function is unique.

## Leibniz entrywise formula.

The Leibniz entrywise formula expresses the determinant as an alternating sum of products of certain entries of the matrix:

> **(Leibniz)** For an $n\times n$ matrix $A$ whose $(i,j)$-th entry is $a_{i,j}$, we have $$ \det(A) = \sum_{\sigma \in \mathfrak S_{n}} \text{sign}(\sigma) a_{1,\sigma(1)} a_{2,\sigma(2)} \cdots a_{n,\sigma(n)}. $$

But what does this mean? Let us first break it down:

![[smc-spring-2024-math-13/linear-algebra-notes/---files/30-determinants-part-2 2024-04-03 18.26.16.excalidraw.svg]]

And since $a_{i,j}$ is the $i$-th row, $j$-th column entry of $A$, we have $$ a_{k,\sigma(k)} \quad\text{is the \(k\)-th row, \(\sigma(k)\)-th column entry of \(A\)}. $$ In other words, $\color{blue} a_{1,\sigma(1)} a_{2,\sigma(2)}\cdots a_{n,\sigma(n)}$ means:

- look at the first row, take the $\sigma(1)$-th entry;
- look at the second row, take the $\sigma(2)$-th entry;
- ... ;
- look at the $n$-th row, take the $\sigma(n)$-th entry,
- and multiply all of them together.

So, for example, if the permutation is $\sigma = 51324 \in \mathfrak S_{5}$, then we want the product $\color{blue} a_{1,5} a_{2,1}a_{3,3}a_{4,2}a_{5,4}$. If we indicate these entries on a $5 \times 5$ matrix, they are as follows:

![[smc-spring-2024-math-13/linear-algebra-notes/---files/30-determinants-part-2 2024-04-03 18.50.46.excalidraw.svg]]

And since $\color{green}\text{sign}(51324)=-1$, we need to remember to multiply by this factor. This is one of the terms in the Leibniz formula.

**Observation.** Recall how a **rook** in chess moves, and notice that the above is a situation where you have five rooks, each in a position so that **none of them are attacking each other**!

Let us illustrate the Leibniz determinant formula on a $3\times 3$ matrix $A$:

![[smc-spring-2024-math-13/linear-algebra-notes/---files/30-determinants-part-2 2024-04-03 19.01.15.excalidraw.svg]]

So, we see that

> The determinant of a square matrix is an alternating ($\pm 1$) sum of products of entries of the matrix that occupy the positions of $n$ rooks that are not attacking each other on an $n\times n$ chessboard.

**Remark.** Using the Leibniz formula on an $n\times n$ matrix would in general give $n!$ terms in the sum! This blows up quickly as $n$ increases. However, it can be useful if the matrix has a lot of zeros, and we can in fact use this idea to prove some properties of determinants.
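Since the Leibniz formula is just a finite sum over permutations, it transcribes directly into code. Here is a minimal Python sketch (the helper names `sign` and `leibniz_det` are our own choices, not from any library); the sign of a permutation is computed by counting inversions, exactly as in the $\text{sign}(51324)=-1$ computation above.

```python
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation p, given as a tuple of 0-based indices:
    +1 if p has an even number of inversions, -1 if odd."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p))
                if p[i] > p[j])
    return -1 if inv % 2 else 1

def leibniz_det(A):
    """Determinant of a square matrix A (a list of rows) by the Leibniz
    formula: for each permutation p, take one entry per row (row k gives
    entry A[k][p[k]], a non-attacking rook pattern), multiply them,
    weight by sign(p), and add everything up.  This is O(n * n!) work."""
    n = len(A)
    return sum(sign(p) * prod(A[k][p[k]] for k in range(n))
               for p in permutations(range(n)))
```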
**Example.** Consider the matrix $A = \begin{pmatrix}3 & 0 & 0 & 1 \\ 0 & 1 & -2 & 0 \\ 0 & 4 & 3 & 0 \\ -5 & 2 & 1 & -1\end{pmatrix}$; compute its determinant $\det(A)$.

$\blacktriangleright$ Imagine using the Leibniz formula on this: there would be $4! = 24$ terms. But among these $24$ terms, many are zero. This happens when we pick entries that land on a zero. So, let us think about how to place 4 rooks on this $4\times 4$ chessboard so that

- they are not attacking each other, and
- they do not land on a zero, because that would just give us a term that is $0$.

Now, by being careful we see that there are only four such patterns that give nonzero terms:

![[smc-spring-2024-math-13/linear-algebra-notes/---files/30-determinants-part-2 2024-04-03 19.26.08.excalidraw.svg]]

So, after accounting for the signs of the permutations, we have $$ \det(A) = + (3)(1)(3)(-1) - (3)(-2)(4)(-1) + (1)(-2)(4)(-5) - (1)(1)(3)(-5) = 22. \quad\blacklozenge $$

If we extend this idea, we can derive some useful results about determinants of matrices with certain patterns:

> **Proposition.**
> Let $A$ be an $n\times n$ square matrix.
> (0) $\det(A) = \det(A^{T})$.
> (1) If $A$ is a diagonal matrix, then $\det(A) =$ product of the diagonal entries.
> (2) If $A$ is a triangular matrix, then $\det(A) =$ product of the diagonal entries.

We proved (0) in the last section, but imagine flipping the chessboard across the main diagonal: you get the same set of rook patterns, with the same signs. Let us argue why (1) and (2) are true. First recall the shapes of these matrices:

![[smc-spring-2024-math-13/linear-algebra-notes/---files/30-determinants-part-2 2024-04-04 00.17.11.excalidraw.svg]]

$\blacktriangleright$ If $A$ is a diagonal matrix, then everything **not** on the diagonal is zero. The only way to place $n$ rooks that are not attacking each other **while avoiding the zeros** is to put all the rooks on the diagonal. So the Leibniz formula reduces to just one term in the sum, namely the product of the diagonal.

If $A$ is an upper or lower triangular matrix, and we place $n$ non-attacking rooks with some rook off the diagonal, then at least one other rook is forced into the region where the entries are zero. So again the Leibniz formula reduces to just one term, the product of the diagonal. $\blacklozenge$

**Example.** Note this determinant of a triangular matrix: $$\det\begin{pmatrix}3 & 0 & 0 \\ 1 & -2 & 0 \\2 & 9 & 8\end{pmatrix} = (3)(-2)(8). \quad\blacklozenge$$

**Remark.** Be careful about what actually is a diagonal matrix or a triangular matrix! However, you can adapt this reasoning to "other shapes" if you account for the signs correctly.

**Example.** Note this matrix is **not triangular** by our definition: $$ A = \begin{pmatrix}0 & 0 & 3 \\ 0 & -2 & 1 \\2 & 9 & 8\end{pmatrix} $$ But using non-attacking rook patterns reveals only one pattern is relevant. Accounting for the sign of this pattern, which is negative, gives $\det(A) = -(3)(-2)(2)=12$. $\blacklozenge$

**Remark.** It takes some practice to use this Leibniz / non-attacking rook pattern method. I encourage you to try. Always try a new way to think about things.
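If you want to double-check rook-pattern computations like these, the `leibniz_det` sketch from above can confirm them, since it uses only exact integer arithmetic:

```python
A = [[ 3, 0,  0,  1],
     [ 0, 1, -2,  0],
     [ 0, 4,  3,  0],
     [-5, 2,  1, -1]]
print(leibniz_det(A))   # 22, matching the four-rook-pattern computation

B = [[0,  0, 3],
     [0, -2, 1],
     [2,  9, 8]]
print(leibniz_det(B))   # 12, the "not triangular" example above
```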
An important but subtle observation one can also make here is this:

> From the Leibniz determinant formula, we see that **calculating a determinant never requires division, just addition, subtraction, and multiplication.**

## Laplace cofactor expansion.

Next, we introduce a **recursive method** of computing determinants. First we define a new notation. For an $n\times n$ matrix $A$, let us write $A\setminus(i,j)$ for the matrix obtained from $A$ by deleting the $i$-th row and the $j$-th column. So $A\setminus (i,j)$ is an $(n-1)\times (n-1)$ matrix.

**Example.** If we have $$ A = \begin{pmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} $$ then for example we would have $$ A\setminus(1,2) = \begin{pmatrix}4 & 6 \\ 7 & 9\end{pmatrix}\quad\text{and}\quad A\setminus(3,1) = \begin{pmatrix}2 & 3\\5 & 6\end{pmatrix}.\quad\blacklozenge $$

We now give the Laplace cofactor expansion, first along the first row:

> **(Laplace expansion, along the $1$-st row)**
> $$ \det(A) = \sum_{k=1}^{n} (-1)^{1+k} a_{1,k} \det (A\setminus (1,k)). $$

We illustrate Laplace expansion along the $1$-st row of a $3\times 3$ matrix to calculate its determinant:

![[smc-spring-2024-math-13/linear-algebra-notes/---files/30-determinants-part-2 2024-04-04 05.16.22.excalidraw.svg]]

**Example.** Let $$ A = \begin{pmatrix}4 & 2 & -2 \\ 3 & 1 & 5 \\ 7 & -3 & 4\end{pmatrix} $$ Applying Laplace cofactor expansion along the $1$-st row, we get $$ \begin{align*} \det(A) & = + (4)\det(A \setminus (1,1)) - (2)\det(A \setminus (1,2)) + (-2)\det(A \setminus (1,3)) \\ & = +(4) \det \begin{pmatrix}1 & 5 \\ -3 & 4\end{pmatrix} - (2) \det \begin{pmatrix}3 & 5\\7 & 4\end{pmatrix} + (-2) \det \begin{pmatrix}3 & 1 \\ 7 & -3\end{pmatrix} \\ & = 4(19) - 2 (-23) -2 (-16) \\ & = 154. \quad\blacklozenge \end{align*} $$

One can also perform Laplace cofactor expansion along any row or any column:

> **(Laplace expansion, along the $i$-th row.)** $$ \det(A) = \sum_{k=1}^{n} (-1)^{i+k} a_{i,k} \det (A\setminus (i,k)). $$
> **(Laplace expansion, along the $j$-th column.)** $$ \det(A) = \sum_{k=1}^{n} (-1)^{k + j} a_{k,j} \det (A\setminus (k,j)). $$

**Remark.** The quantity $C_{i,j}=(-1)^{i+j}\det(A\setminus(i,j))$ is sometimes called the **$(i,j)$-th cofactor of $A$**, hence the name Laplace cofactor expansion. So we can also rephrase the formulas as:

- Laplace expansion along the $i$-th row: $$\det(A) = \sum_{k=1}^{n} a_{i,k}C_{i,k}$$
- Laplace expansion along the $j$-th column: $$\det(A) = \sum_{k=1}^{n} a_{k,j}C_{k,j}$$

**Remark.** Don't forget the signs! You can use a checkerboard of signs to remind yourself what additional factor of $+1$ or $-1$ is needed, starting with the top left as $+$ : $$ \begin{matrix}+ & - & + & - & \\ - & + & - & + & \cdots \\ + & - & + & - & \\ - & + & - & + \\ & \vdots & & \end{matrix} $$

**Remark.** If we only use Laplace cofactor expansion recursively, an $n\times n$ determinant turns into $n$ many $(n-1)\times(n-1)$ determinants, each of those becomes $n-1$ many $(n-2)\times(n-2)$ determinants, and so on. So this ends up with about $n!$ calculations to do, a similar "problem" as the Leibniz formula! However, it can still be useful if the matrix has a row or column with a lot of zeros, or to prove results about determinants.
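The recursion above translates directly into code. Here is a minimal Python sketch (the names `minor` and `laplace_det` are again our own), expanding along the first row, with a $1\times 1$ matrix as the base case:

```python
def minor(A, i, j):
    """The matrix A with row i and column j deleted (0-based indices);
    this is the deleted-row-and-column matrix of the notes."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def laplace_det(A):
    """Determinant by Laplace cofactor expansion along the first row:
    det(A) = sum over k of (-1)^k * A[0][k] * det(minor(A, 0, k))."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k] * laplace_det(minor(A, 0, k))
               for k in range(n))

A = [[4,  2, -2],
     [3,  1,  5],
     [7, -3,  4]]
print(laplace_det(A))   # 154, as in the example above
```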
**Example.** Compute the determinant of the following matrix $$ A = \begin{pmatrix}4 & 2 & 6 & 3 & 0 \\ 3 & 4 & 0 & 0 & 3 \\ 1 & 0 & 2 & 0 & 5 \\ 5 & 0 & 1 & 0 & 3 \\ 0 & 7 & 2 & 5 & 2\end{pmatrix}. $$

$\blacktriangleright$ Note that the fourth column has $3$ zeros, so let us expand down the fourth column, and then continue expanding along columns with many zeros: $$ \begin{align*} \det \begin{pmatrix}4 & 2 & 6 & 3 & 0 \\ 3 & 4 & 0 & 0 & 3 \\ 1 & 0 & 2 & 0 & 5 \\ 5 & 0 & 1 & 0 & 3 \\ 0 & 7 & 2 & 5 & 2\end{pmatrix} & =-3 \det \begin{pmatrix} 3 & 4 & 0 & 3 \\ 1 & 0 & 2 & 5 \\ 5 & 0 & 1 & 3 \\ 0 & 7 & 2 & 2\end{pmatrix} -5\det \begin{pmatrix}4 & 2 & 6 & 0 \\ 3 & 4 & 0 & 3 \\ 1 & 0 & 2 & 5 \\ 5 & 0 & 1 & 3 \\ \end{pmatrix} \\ & = -3\left(-4 \det\begin{pmatrix}1 & 2 & 5 \\ 5 & 1 & 3 \\ 0 & 2 & 2\end{pmatrix}+7\det \begin{pmatrix}3 & 0 & 3 \\ 1 & 2 & 5 \\ 5 & 1 & 3\end{pmatrix}\right) \\ &\ \ \ \ \ \ \ -5\left(-2\det\begin{pmatrix}3 & 0 & 3 \\ 1 & 2 & 5 \\ 5 & 1 & 3\end{pmatrix} +4 \det\begin{pmatrix}4 & 6 & 0 \\ 1 & 2 & 5 \\ 5 & 1 & 3\end{pmatrix} \right). \end{align*} $$

Usually we do this until we reach matrices small enough to apply a formula (or one can also continue the recursion). Since we do have a $3\times 3$ determinant formula, we can compute each piece and finish this calculation: $$ \begin{align*} \det(A) & =-3(-4(2+50-20-6)+7(18+3-30-15)) \\ & \ \ \ \ \ \ \ \ \ \ -5(-2(18+3-30-15)+4(24+150-18-20)) \\ & = -2144. \quad\blacklozenge \end{align*} $$

**Remark.** Pay attention to the signs! And notice that we get more and more terms (a factorial explosion), so we have to be careful. In practice we use a combination of methods to calculate determinants; we will see another one next. But before we move on, again a subtle observation:

> From the Laplace cofactor expansion, we see that **calculating a determinant never requires division, just addition, subtraction, and multiplication.**

## Elementary row operations to compute determinants.

If we keep track of how elementary row operations affect the determinant of a matrix, we can use them to compute determinants. Here is our claim:

> **Proposition.**
> Let $A$ be an $n\times n$ matrix. Suppose an elementary row operation $\epsilon$ is applied to the matrix $A$ and we obtain $\tilde A$, namely $A \xrightarrow{\epsilon}\tilde A$. Then we know there is an elementary matrix $E$ such that $EA = \tilde A$, and $$ \det(EA) = \mu \det(A) $$ where the scalar factor $\mu$ depends on the type of $\epsilon$: $$ \begin{array}{l} \text{If \(\epsilon\) is a row swap, then \(\mu=-1\)} \\ \text{If \(\epsilon\) is a scaling of a row by \(\lambda\), then \(\mu=\lambda\)} \\ \text{If \(\epsilon\) is a row replacement, then \(\mu=1\)} \\ \end{array} $$

Here we remind the reader that

- a row swap is $R_{i} \leftrightarrow R_{j}$ where $i\neq j$,
- scaling a row by $\lambda \neq 0$ is $R_{i}\to \lambda R_{i}$,
- and replacement is $R_{i}\to R_{i}+ \lambda R_{j}$, where $i\neq j$.

**Remark.** It is important to remember what a replacement is, and not to misuse it. Replacement is the best operation here, as it does not change the determinant.

These effects of elementary row operations can be proved easily using the characterizing definition of the determinant (multilinear, antisymmetric, and normalized at the identity), which we do at the end of this section. First let us record how one uses them to compute a determinant. The goal is to apply these elementary steps to obtain another matrix whose determinant you can compute easily (say, a triangular one).
> Suppose we perform a sequence of elementary row operations $\epsilon_{i}$ on a square matrix $A$ to obtain $\tilde A$, that is, $$ A\xrightarrow{\epsilon_{1}}A_{1}\xrightarrow{\epsilon_{2}}A_{2}\xrightarrow{\epsilon_{3}}\cdots \xrightarrow{\epsilon_{m}}\tilde A. $$ Then we have elementary matrices $E_{i}$ corresponding to $\epsilon_{i}$ such that $$ E_{m}\cdots E_{3}E_{2}E_{1}A=\tilde A. $$ Applying the proposition repeatedly, we get $$ \mu_{m}\cdots\mu_{3}\mu_{2}\mu_{1}\det(A)=\det(\tilde A), $$ where $\mu_{i}$ is the scalar factor corresponding to the elementary row operation $\epsilon_{i}$. Once we have established the above, we can solve for $\det(A)$ from $\det(\tilde A)$.

**Example.** Compute $\det(A)$ where $$ A = \begin{pmatrix}4 & 3 & 0 & 2 \\ 5 & 3 & 2 & 1 \\ 0 & 1& 2 & 3 \\ 3 & 2 & 4 & 1\end{pmatrix} $$

$\blacktriangleright$ We compute $\det(A)$ by applying elementary row operations to $A$ to obtain a triangular matrix $\tilde A$ (say an echelon form), keeping track of the steps. For illustration purposes, we show all the different kinds of steps here: $$ \begin{array}{ccl} A = \begin{pmatrix}4 & 3 & 0 & 2 \\ 5 & 3 & 2 & 1 \\ 0 & 1& 2 & 3 \\ 3 & 2 & 4 & 1\end{pmatrix} & \xrightarrow[\text{scale}]{R_{1}\to \frac{1}{4} R_{1}} & \begin{pmatrix}1 & \frac{3}{4} & 0 & \frac{1}{2} \\ 5 & 3 & 2 & 1 \\ 0 & 1& 2 & 3 \\ 3 & 2 & 4 & 1\end{pmatrix} \\ &\xrightarrow[\text{replacement}]{R_{2}\to R_{2}-5 R_{1}} & \begin{pmatrix} 1 & \frac{3}{4} & 0 & \frac{1}{2} \\ 0 & -\frac{3}{4} & 2 & -\frac{3}{2} \\ 0 & 1& 2 & 3 \\ 3 & 2 & 4 & 1\end{pmatrix} \\ & \xrightarrow[\text{replacement}]{R_{4}\to R_{4}-3 R_{1}} & \begin{pmatrix} 1 & \frac{3}{4} & 0 & \frac{1}{2} \\ 0 & -\frac{3}{4} & 2 & -\frac{3}{2} \\ 0 & 1& 2 & 3 \\ 0 & -\frac{1}{4} & 4 & -\frac{1}{2}\end{pmatrix} \\ & \xrightarrow[\text{swap}]{R_{2}\leftrightarrow R_{3}} & \begin{pmatrix} 1 & \frac{3}{4} & 0 & \frac{1}{2} \\ 0 & 1& 2 & 3 \\ 0 & -\frac{3}{4} & 2 & -\frac{3}{2} \\ 0 & -\frac{1}{4} & 4 & -\frac{1}{2}\end{pmatrix} \\ & \xrightarrow[\text{scale}]{R_{3}\to 4R_{3}} & \begin{pmatrix} 1 & \frac{3}{4} & 0 & \frac{1}{2} \\ 0 & 1& 2 & 3 \\ 0 & -3 & 8 & -6 \\ 0 & -\frac{1}{4} & 4 & -\frac{1}{2}\end{pmatrix} \\ & \xrightarrow[\text{scale}]{R_{4}\to 4R_{4}} & \begin{pmatrix} 1 & \frac{3}{4} & 0 & \frac{1}{2} \\ 0 & 1& 2 & 3 \\ 0 & -3 & 8 & -6 \\ 0 & -1& 16 & -2\end{pmatrix} \\ & \xrightarrow[\text{replacement}]{R_{3}\to R_{3}+3R_{2}} & \begin{pmatrix} 1 & \frac{3}{4} & 0 & \frac{1}{2} \\ 0 & 1& 2 & 3 \\ 0 & 0 & 14 & 3 \\ 0 & -1& 16 & -2\end{pmatrix} \\ & \xrightarrow[\text{replacement}]{R_{4}\to R_{4}+R_{2}} & \begin{pmatrix} 1 & \frac{3}{4} & 0 & \frac{1}{2} \\ 0 & 1& 2 & 3 \\ 0 & 0 & 14 & 3 \\ 0 & 0& 18 & 1\end{pmatrix} \\ & \xrightarrow[\text{replacement}]{R_{4}\to R_{4} - \frac{18}{14} R_{3}} & \begin{pmatrix} 1 & \frac{3}{4} & 0 & \frac{1}{2} \\ 0 & 1& 2 & 3 \\ 0 & 0 & 14 & 3 \\ 0 & 0& 0 & -\frac{40}{14}\end{pmatrix} = \tilde A \end{array} $$

We stop at the final matrix $\tilde A$ as it is triangular, so its determinant is just the product of the diagonal, $\det(\tilde A) = (1)(1)(14)\left(-\frac{40}{14}\right) = -40$. Now we keep track of the effects applied to $\det(A)$; since the scalar factor of a replacement is $1$, we can ignore the replacements, and we get $$ (4)(4)(-1)\left(\tfrac{1}{4}\right) \det(A) = \det(\tilde A) = -40 $$ and thus $$ \det(A) = 10. \quad\blacklozenge $$
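This procedure is mechanical enough to code up. The following minimal Python sketch (the name `rowred_det` is our own) mirrors it: it row reduces to an upper triangular matrix using only swaps and replacements, tracks the sign flips from swaps, and multiplies the diagonal at the end. The scaling steps in the worked example above are just a hand-computation convenience and are skipped here.

```python
def rowred_det(A):
    """Determinant by row reduction: use row replacements (determinant
    unchanged) and row swaps (each flips the sign) to reach an upper
    triangular matrix, then take the product of its diagonal.
    About n^3 arithmetic operations, but with divisions."""
    M = [list(map(float, row)) for row in A]   # work on a float copy
    n = len(M)
    det = 1.0
    for c in range(n):
        # find a row at or below position c with a nonzero pivot entry
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return 0.0                # no pivot in this column: det is 0
        if p != c:
            M[c], M[p] = M[p], M[c]   # row swap flips the sign
            det = -det
        for r in range(c + 1, n):
            t = M[r][c] / M[c][c]     # division, unlike Leibniz/Laplace
            # replacement R_r -> R_r - t * R_c leaves the determinant alone
            M[r] = [M[r][k] - t * M[c][k] for k in range(n)]
    for k in range(n):
        det *= M[k][k]                # product of the diagonal
    return det

A = [[4, 3, 0, 2],
     [5, 3, 2, 1],
     [0, 1, 2, 3],
     [3, 2, 4, 1]]
print(rowred_det(A))   # 10.0, matching the worked example
```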
With the elementary row operation method of computing determinants, we can establish an important property: the determinant is multiplicative.

> **Product property of determinant (determinant is multiplicative).**
> If $A,B$ are both $n\times n$ matrices, then $$ \det(AB)=\det(A)\det(B). $$

$\blacktriangleright$ Proof. We prove it by considering two cases: when $A$ is invertible and when it is not.

Suppose $A$ is not invertible. Then $AB$ is also not invertible. Indeed, if to the contrary $AB$ were invertible, then there would exist some matrix $C$ such that $(AB)C=I$ and $C(AB)=I$. But the first equation says, using associativity of the matrix product, $A(BC)=I$, which means $BC$ is an inverse of $A$ (recall that for square matrices, a one-sided inverse is the inverse), contradicting that $A$ is not invertible. So, since $A$ and $AB$ are both not invertible, we know $\det(A)=0$ and $\det(AB)=0$ (a non-invertible matrix row reduces to a matrix with a zero row, which has determinant $0$, and row operations only change the determinant by nonzero factors). Hence $$ \det(AB)=\det(A)\det(B). $$

Next, suppose $A$ is invertible. Then recall we can write $A$ as a product of elementary matrices $E_{i}$ times the identity matrix, say $$ A = E_{1}E_{2}\cdots E_{m}I. $$ Then we have $$ \det(A)=\det(E_{1}E_{2}\cdots E_{m}I)=\mu_{1}\mu_{2}\cdots\mu_{m}\det(I)=\mu_{1}\mu_{2}\cdots\mu_{m}, $$ where $\mu_{i}$ is the scalar factor associated to the elementary matrix $E_{i}$. But then we also have $$ \begin{align*} \det(AB) & =\det(E_{1}E_{2}\cdots E_{m}I B) \\ & =\mu_{1}\mu_{2}\cdots\mu_{m}\det(IB) \\ & =\det(A)\det(B), \end{align*} $$ as claimed. $\blacksquare$

With the multiplicative property of determinants established, we can now prove that if a matrix is invertible, then its determinant is not zero.

> **Proposition.**
> If a square matrix $A$ is invertible, then $\det(A) \neq 0$, and $$ \det(A^{-1}) = \frac{1}{\det(A)}. $$

$\blacktriangleright$ Proof. If $A$ is invertible, then there exists a matrix $A^{-1}$ such that $AA^{-1}=I$. By the multiplicative property, we have $$ \det(AA^{-1})=\det(I) \implies\det(A)\det(A^{-1})=1, $$ and if to the contrary $\det(A)=0$, then we would not get $1$ on the right-hand side. So $\det(A)\neq0$, and solving, we see that $\det(A^{-1})= \frac{1}{\det(A)}$, done! $\blacksquare$

**Remark.** Generally, it takes about $n^{3}$ elementary row operations to turn an $n\times n$ matrix into a triangular matrix, which is far fewer steps than the roughly $n!$ of the Leibniz formula or the recursive Laplace expansion. So in principle it is faster. Is there a catch? Yes. In doing elementary row operations you may have to **divide**, and when implementing division on actual machines and computers there are stability issues, as computers only have so much precision.

## Proof of the elementary row operation effects on determinants, and why the determinant is unique.

Let us prove that elementary row operations have the effects on determinants claimed in the previous section.

$\blacktriangleright$ Proof. Let $A,\tilde A$ be $n\times n$ matrices.

Suppose $EA=\tilde A$ where $E$ is the elementary matrix associated with a row swap. Then by the antisymmetry property we have $$ \det(EA)=\det(\tilde A)=-\det(A). $$

Suppose $EA=\tilde A$ where $E$ is the elementary matrix associated with scaling a row by $\lambda$. Then by the multilinearity property we have $$ \det(EA) = \det(\tilde A) = \lambda \det(A). $$
Suppose $EA=\tilde A$ where $E$ is the elementary matrix associated with a row replacement $R_{i}\to R_{i}+\lambda R_{j}$ with $i\neq j$. Looking at the rows of $A$ and $\tilde A$, they are $$ A=\begin{pmatrix}R_{1} \\ \vdots \\ R_{i} \\ \vdots \\ R_{j} \\ \vdots \\ R_{n}\end{pmatrix},\quad\text{and}\quad \tilde A=\begin{pmatrix}R_{1} \\ \vdots \\ R_{i} +\lambda R_{j}\\ \vdots \\ R_{j} \\ \vdots \\ R_{n}\end{pmatrix}. $$ So by the multilinearity property of the determinant, we have $$ \det(\tilde A)= \det\begin{pmatrix}R_{1} \\ \vdots \\ R_{i}+\lambda R_{j} \\ \vdots \\ R_{j} \\ \vdots \\ R_{n}\end{pmatrix}=\underbrace{\det\begin{pmatrix}R_{1} \\ \vdots \\ R_{i} \\ \vdots \\ R_{j} \\ \vdots \\ R_{n}\end{pmatrix}}_{\det(A)}+\lambda\underbrace{\det\begin{pmatrix}R_{1} \\ \vdots \\ R_{j} \\ \vdots \\ R_{j} \\ \vdots \\ R_{n}\end{pmatrix}}_{0}=\det(A), $$ where the second determinant is $0$ by antisymmetry, since that matrix has two equal rows. So the determinant is unchanged under row replacement! $\blacksquare$

With this we can establish that the determinant function, characterized by (1) multilinearity, (2) antisymmetry, and (3) normalization at the identity, is unique.

$\blacktriangleright$ Proof. Suppose $d$ and $d'$ are two scalar functions on $n\times n$ matrices that both satisfy (1) multilinearity in the rows, (2) antisymmetry in the rows, and (3) taking the value $1$ at $I$. We show $d=d'$ by showing that $d(A)=d'(A)$ for every $n\times n$ matrix $A$.

By elementary row operations, we row reduce $A$ to its reduced row echelon form (RREF) $\tilde A$, say $$ E_{m}\cdots E_{1}A = \tilde A, $$ where the $E_{i}$ are elementary matrices. Now, since $d$ and $d'$ respect elementary row operations, we have $$ d(E_{m}\cdots E_{1}A) = d(\tilde A) \quad\text{and}\quad d'(E_{m}\cdots E_{1}A) = d'(\tilde A). $$ But for each elementary matrix, $d$ and $d'$ have the same effect, so we have $$ \mu_{m}\cdots \mu_{1}d(A)=d(\tilde A) \quad\text{and}\quad \mu_{m}\cdots\mu_{1}d'(A)=d'(\tilde A), $$ where each $\mu_{i}$ is some nonzero factor.

Now, if $A$ is **invertible**, its reduced row echelon form is just the identity matrix, $\tilde A = I$, so this shows $$ \mu_{m}\cdots\mu_{1}d(A)=1 \implies d(A)= \frac{1}{\mu_{1}\cdots\mu_{m}} $$ and $$ \mu_{m}\cdots\mu_{1}d'(A)=1 \implies d'(A)= \frac{1}{\mu_{1}\cdots\mu_{m}}=d(A). $$ And if $A$ is **not invertible** (singular), its reduced row echelon form $\tilde A$ has a row of zeros, as it has strictly fewer than $n$ pivots. So by multilinearity (scale the zero row by $0$), $d(\tilde A)=0$ and $d'(\tilde A)=0$, whence $d(A)=d'(A)=0$.

In conclusion, $d(A)=d'(A)$ for any square matrix $A$, so the determinant function is unique! $\blacksquare$
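As a closing sanity check, and assuming the sketches `leibniz_det`, `laplace_det`, and `rowred_det` defined earlier in this section, one can verify numerically that the three methods agree and that the properties we proved hold on random integer matrices (where `leibniz_det` is exact, since it never divides):

```python
import random

n = 4
A = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
B = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
AB = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]

# the three methods agree (row reduction up to float rounding)
assert leibniz_det(A) == laplace_det(A)
assert abs(rowred_det(A) - leibniz_det(A)) < 1e-6

# det(AB) = det(A) det(B), exactly, in integer arithmetic
assert leibniz_det(AB) == leibniz_det(A) * leibniz_det(B)

# a row swap flips the sign
A_swapped = [A[1], A[0]] + A[2:]
assert leibniz_det(A_swapped) == -leibniz_det(A)
```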